# Documentation/Modules/ConstraintHandler

## Summary

Some optimization problems define objectives as well as constraints.

For example, in the optimization problem `ProblemTruss`, the weight of the truss can be set to be minimized, while constraints are set on the maximum stress and displacement of the truss.

This module aggregates objectives and constraints into a single output.

## Properties

### General

The algorithm is deterministic (no gradient handling is implemented). Continuous variables as well as discrete or mixed variables are possible.

### Connections

- One connection of type `optimization` enters this module.
- One connection of type `optimization` starts at this module.


### Options

The options are currently described as "pop-up help".

## Module Description

The result of evaluating a solution may contain objective values `f` and constraint values `g`.

While objectives are to be minimized (`min(f)`), constraints are to be satisfied (`g <= 0`).
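As a minimal illustration, a single evaluated design could be represented as follows (the values and the weight/stress interpretation are made up for this example, not taken from a specific problem):

```python
# Hypothetical evaluation result of a single truss design:
f = [120.5]            # objective values: e.g. truss weight, to be minimized
g = [0.03, -0.10]      # constraint values: feasible only when every g_j <= 0

feasible = all(gj <= 0 for gj in g)
print(feasible)        # -> False (the first constraint is violated)
```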

0. No Constraint Handling

This option turns off the constraint handling. Objective values and constraints are not changed.

1. Delete Constraints

Deletes all constraints. This can be used if the goal is to minimize the objective function(s) only. The objective functions are not changed.

2. Penalty Method

To each objective function, a penalty (greater than or equal to zero) is added. All constraint violations (i.e. constraint values greater than zero) are summed and multiplied by a penalty factor `p`:

```
penalized objective i = fi + sumj(max(gj, 0)) * p
```

Increasing the penalty factor `p` puts more weight on the constraints compared to the objectives.
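The penalty method above can be sketched in a few lines of Python. The function name and signature are illustrative, not taken from the module's API:

```python
def penalize(f, g, p):
    """Add the weighted sum of constraint violations to each objective.

    f : list of objective values (to be minimized)
    g : list of constraint values (feasible when g_j <= 0)
    p : penalty factor (>= 0)
    """
    violation = sum(max(gj, 0.0) for gj in g)   # only violated constraints contribute
    return [fi + p * violation for fi in f]

# One violated constraint (0.5) weighted by p = 10 adds 5.0 to each objective:
print(penalize([1.0, 2.0], [0.5, -1.0], 10.0))  # -> [6.0, 7.0]
```

Note that a feasible solution (all `g_j <= 0`) keeps its objective values unchanged, regardless of `p`.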

3. Stochastic Ranking

Limitations:

1. only applicable to single-objective problems
2. requires population-based search methods such as CMA-ES

Rough description of the main steps of stochastic ranking (1):

1. For each solution, the penalty is computed as the sum of all constraint violations (i.e. constraint values greater than zero), `p = sumj(max(gj, 0))`, as in the Penalty Method.
2. The solutions of the population are randomly ranked.
3. Pairs of adjacent solutions in the population are compared, either based on their objective values or based on their penalties. Typically the penalty-based comparison should have the higher probability, about 60%. The winner of each comparison obtains the better ranking position.
4. The comparison sweep is repeated several times.
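The steps above can be sketched as a bubble-sort-like procedure in Python. This is a simplified reading of the algorithm in reference (1); the function name, the `pf` parameter (probability of an objective-based comparison), and the stopping rule are assumptions for illustration, not part of the module's API:

```python
import random

def stochastic_rank(f, g, pf=0.4, sweeps=None, rng=None):
    """Rank a population by stochastic ranking (single objective).

    f  : objective values, f[i] for solution i (to be minimized)
    g  : constraint values, g[i] is the list of constraints of solution i
    pf : probability of comparing by objective when a solution is
         infeasible (so penalty-based comparisons dominate, ~60%)
    Returns the solution indices in rank order (best first).
    """
    rng = rng or random.Random()
    n = len(f)
    # Step 1: penalty = sum of constraint violations, as in the Penalty Method.
    penalty = [sum(max(gj, 0.0) for gj in gi) for gi in g]
    # Step 2: random initial ranking of the population.
    order = list(range(n))
    rng.shuffle(order)
    sweeps = sweeps if sweeps is not None else n
    for _ in range(sweeps):               # step 4: repeat the sweep
        swapped = False
        for k in range(n - 1):            # step 3: compare adjacent solutions
            i, j = order[k], order[k + 1]
            both_feasible = penalty[i] == 0.0 and penalty[j] == 0.0
            if both_feasible or rng.random() < pf:
                better = f[j] < f[i]              # compare by objective
            else:
                better = penalty[j] < penalty[i]  # compare by penalty
            if better:                    # winner gets the better position
                order[k], order[k + 1] = j, i
                swapped = True
        if not swapped:
            break
    return order

# With an all-feasible population the ranking reduces to sorting by objective:
print(stochastic_rank([3.0, 1.0, 2.0], [[-1.0], [-1.0], [-1.0]]))  # -> [1, 2, 0]
```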


## References

(1) T. P. Runarsson and X. Yao, "Stochastic Ranking for Constrained Evolutionary Optimization", IEEE Transactions on Evolutionary Computation, Vol. 4, No. 3, pp. 284-294, Sep. 2000.