Documentation/Modules/ConstraintHandler

<< BACK TO MODULE OVERVIEW <<

Summary

Some optimization problems define objectives as well as constraints.

For example, for the optimization problem ProblemTruss one can set the weight of the truss to be minimized, while constraints are set on the maximum stress and displacement of the truss.

This module aggregates objectives and constraints into a single output.

Properties

General

Algorithm: deterministic (as no gradient handling is implemented)
Design Variables: continuous variables; discrete or mixed variables are possible
Objectives: any number
Constraints: any number
Boundaries: not affected
Initial Search Region: not affected
Typical X: not affected
Initialization: not required

Connections

Starting at this module: one connection of type optimization
Ending at this module: one connection of type optimization

Actions

Name        Description
-           -

Options

The options are currently described as "pop-up help".

Module Description

The result of an evaluation of a solution may contain objective values f and constraint values g:

While objectives are to be minimized (min(f)), constraints are to be fulfilled (g <= 0).
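
For example, a stress limit of the form stress <= s_max can be expressed as the constraint value

 g = stress - s_max

which is fulfilled exactly when g <= 0 (s_max is an illustrative symbol, not a parameter of this module).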

0. No Constraint Handling

This option turns off the constraint handling. Objective values and constraints are not changed.

1. Delete Constraints

Deletes all constraints. This can be used if the goal is to minimize the objective function(s) only. The objective functions are not changed.

2. Penalty Method

To each objective function, a penalty (greater than or equal to zero) is added. All violated constraints (i.e. constraints whose value is > 0) are summed and multiplied by a penalty factor p:

 penalized objective_i = f_i + sum_j(max(g_j, 0)) * p

Increasing the penalty factor p puts more weight on the constraints compared to the objectives.
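
As an illustration only, here is a short Java sketch of the penalty computation described above; the array-based interface, the class name, and the example values are assumptions for this sketch and are not taken from the ConstraintHandler source.

 // Sketch of the penalty method described above (not the module's actual code).
 // Assumption: objectives f and constraints g are given as plain double arrays.
 public final class PenaltyMethodSketch {
 
     /** Returns the penalized objectives: f_i + p * sum_j(max(g_j, 0)). */
     public static double[] penalize(double[] f, double[] g, double p) {
         double violation = 0.0;
         for (double gj : g) {
             violation += Math.max(gj, 0.0); // only violated constraints (g_j > 0) contribute
         }
         double[] penalized = new double[f.length];
         for (int i = 0; i < f.length; i++) {
             penalized[i] = f[i] + p * violation; // the same penalty is added to each objective
         }
         return penalized;
     }
 
     public static void main(String[] args) {
         double[] f = {1.5};       // one objective, e.g. the truss weight
         double[] g = {-0.2, 0.3}; // first constraint fulfilled, second violated by 0.3
         double p = 10.0;          // penalty factor
         // prints [4.5], i.e. 1.5 + 10 * 0.3
         System.out.println(java.util.Arrays.toString(penalize(f, g, p)));
     }
 }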

3. Stochastic Ranking

Limitations:

  1. only for single-objective problems
  2. requires population-based search methods such as CMA-ES

Rough description of the main steps of Stochastic Ranking (1); a code sketch follows the list:

  1. For each solution, the penalty is computed as the sum of all violated constraints (i.e. constraints whose value is > 0), p = sum_j(max(g_j, 0)), as in the Penalty Method.
  2. The solutions of the population are randomly ranked.
  3. Pairs of adjacent solutions in the population are compared, either based on their objective value or based on their penalty. Typically, the comparison by penalty should be chosen with a higher probability of about 60%. The winner of each comparison obtains the better ranking position.
  4. This sweep of pairwise comparisons is repeated several times.
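
Below is a compact Java sketch of this ranking procedure as described in the list above; the class and field names, the penalty-comparison probability of 0.6, and the number of sweeps are illustrative assumptions rather than the module's actual implementation or the exact algorithm from (1).

 import java.util.Arrays;
 import java.util.Collections;
 import java.util.Random;
 
 // Sketch of the stochastic ranking steps listed above (single objective,
 // penalties already computed as p = sum_j(max(g_j, 0))). Names and constants
 // are illustrative and do not reflect the module's actual implementation.
 public final class StochasticRankingSketch {
 
     /** One candidate solution of the population. */
     static final class Solution {
         final double objective; // objective value f
         final double penalty;   // sum of violated constraint values
         Solution(double objective, double penalty) {
             this.objective = objective;
             this.penalty = penalty;
         }
     }
 
     /** Ranks the population in place; a lower index means a better rank. */
     static void rank(Solution[] pop, int sweeps, double probPenalty, Random rng) {
         Collections.shuffle(Arrays.asList(pop), rng); // step 2: random initial ranking
         for (int s = 0; s < sweeps; s++) {            // step 4: repeat the sweep several times
             boolean swapped = false;
             for (int i = 0; i < pop.length - 1; i++) {
                 // Step 3: compare adjacent solutions by penalty with probability ~0.6,
                 // otherwise by objective value; the winner gets the better position.
                 boolean byPenalty = rng.nextDouble() < probPenalty;
                 double a = byPenalty ? pop[i].penalty : pop[i].objective;
                 double b = byPenalty ? pop[i + 1].penalty : pop[i + 1].objective;
                 if (b < a) {
                     Solution tmp = pop[i];
                     pop[i] = pop[i + 1];
                     pop[i + 1] = tmp;
                     swapped = true;
                 }
             }
             if (!swapped) break; // no swaps in this sweep: the ordering is stable
         }
     }
 }

Comparing mostly by penalty but sometimes by objective value lets slightly infeasible solutions with good objective values survive the ranking, which is the central idea of stochastic ranking.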

Usage

-

Source Code

https://sourceforge.net/p/opendino/code/HEAD/tree/trunk/src/org/opendino/modules/optim/tools/ConstraintHandler.java

References

(1) T. P. Runarsson and X. Yao, "Stochastic Ranking for Constrained Evolutionary Optimization", IEEE Transactions on Evolutionary Computation, Vol. 4, No. 3, pp. 284-294, Sep. 2000.