Documentation/Modules
'''Modules''' contain all the functionality for optimizing and learning. A '''Module''' may contain an optimization algorithm, an artificial neural network, or a problem to optimize.
Here is a list of the documented modules in OpenDino. Further modules may exist but may not be documented yet.
== Single Objective Optimization Algorithms ==

=== Indirect, Deterministic Algorithms ===

Indirect algorithms use gradient or higher-order derivative information in the optimization.

Not implemented yet.
=== Direct, Deterministic Algorithms ===

These algorithms use neither gradient information nor stochastic processes.

* [[Documentation/Modules/OptAlgSimplex | <code>OptAlgSimplex</code>]]: Nelder-Mead Simplex Algorithm
=== Direct, Stochastic Algorithms ===

These algorithms do not use gradient information but rely on stochastic processes (i.e., random numbers) in their search.
* Evolutionary Algorithms
** [[Documentation/Modules/OptAlgOpO|<code>OptAlgOpO</code>]]: the (1+1)-ES, a 1+1 Evolution Strategy with 1/5 Success Rule
** [[Documentation/Modules/OptAlgCMA|<code>OptAlgCMA</code>]]: the CMA-ES, a Multi-member Evolution Strategy with Covariance Matrix Adaptation
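The core loop of a (1+1)-ES with the 1/5 success rule is compact enough to sketch. The following standalone Python example illustrates the principle behind <code>OptAlgOpO</code>; it is not the module's actual implementation, and the function name and parameters are invented for this illustration.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iters=1000, seed=1):
    """(1+1)-ES sketch: one parent, one offspring per generation,
    step size adapted with the 1/5 success rule."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(iters):
        # mutation: add isotropic Gaussian noise scaled by sigma
        y = x + sigma * rng.standard_normal(x.size)
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy          # elitist '+' selection: keep the better point
            sigma *= 1.5           # success -> enlarge the step size
        else:
            sigma *= 1.5 ** -0.25  # failure -> shrink the step size
        # at a success rate of 1/5, sigma stays constant on average:
        # 1.5**1 * (1.5**-0.25)**4 == 1
    return x, fx
```

The asymmetric update factors are what encode the 1/5 rule: one success balances exactly four failures.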
== Single and Multi-Objective Optimization Algorithms ==
=== Direct, Stochastic Algorithms ===
* Evolutionary Algorithms
** [[Documentation/Modules/OptAlgMoCMA | <code>OptAlgMoCMA</code>]]: Elitist Evolution Strategy with Covariance Matrix Adaptation
* Particle Methods
** [[Documentation/Modules/OptAlgMOPSO | <code>OptAlgMOPSO</code>]]: Particle Swarm Optimization Algorithm
== Design of Experiments ==

* [[Documentation/Modules/DoePlanner | <code>DoePlanner</code>]]: A Module Containing Different DoE Plans
* [[Documentation/Modules/RandomSampling | <code>RandomSampling</code>]]: Uniform Random Sampling (Monte Carlo)
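Uniform random (Monte Carlo) sampling of a box-shaped design space is simple to sketch. The following is a generic Python illustration, not the <code>RandomSampling</code> module's API; the function name and arguments are assumptions for this example.

```python
import numpy as np

def random_sampling(lower, upper, n_samples, seed=0):
    """Draw n_samples design points uniformly from the box [lower, upper)."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    # each row is one design point, scaled from [0, 1) to [lower, upper)
    return lower + rng.random((n_samples, lower.size)) * (upper - lower)
```

For example, <code>random_sampling([-5, 0], [5, 10], 100)</code> returns a 100 x 2 matrix whose first column lies in [-5, 5) and whose second column lies in [0, 10).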
== Optimization in General ==

* [[Documentation/Modules/BoundHandler | <code>BoundHandler</code>]]: For Optimization Algorithms without Bound Handling
* [[Documentation/Modules/ConstraintHandler | <code>ConstraintHandler</code>]]: For Optimization Algorithms without Constraint Handling
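Two common strategies such a bound-handling helper can apply are projection (clipping a violating candidate onto the boundary) and reflection (mirroring it back inside). The sketch below is a generic Python illustration of these two strategies, not the <code>BoundHandler</code> API.

```python
import numpy as np

def clip_to_bounds(x, lower, upper):
    """Bound handling by projection: clamp each coordinate into [lower, upper]."""
    return np.clip(x, lower, upper)

def reflect_at_bounds(x, lower, upper):
    """Bound handling by reflection: mirror violating coordinates back inside."""
    x = np.asarray(x, float)
    span = np.asarray(upper, float) - np.asarray(lower, float)
    # fold each coordinate into a period of 2*span, then mirror the upper half
    y = np.mod(x - lower, 2.0 * span)
    y = np.where(y > span, 2.0 * span - y, y)
    return lower + y
```

Projection can pile candidates up on the boundary, whereas reflection keeps them in the interior; which is preferable depends on the optimization algorithm being wrapped.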
− | |||
== Optimization Problems ==

* [[Documentation/Modules/ProblemSimple | <code>ProblemSimple</code>]]: A Simple Test Problem
* [[Documentation/Modules/ContinuousTestProblems | <code>ContinuousTestProblems</code>]]: A Set of Single-Objective Test Problems
* [[Documentation/Modules/ContinuousMOTestProblems | <code>ContinuousMOTestProblems</code>]]: A Set of Multi-Objective Test Problems
* [[Documentation/Modules/ProblemTruss | <code>ProblemTruss</code>]]: The goal is to optimize the thickness of 10 trusses. The weight of the trusses, the maximum stress, and the displacement can each be set either as an objective or as a constraint.
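As an illustration of the kind of functions such test-problem modules provide, here are a classic single-objective problem (the sphere) and a classic bi-objective problem (Schaffer's function) in plain Python. These are textbook examples, not code taken from the OpenDino modules.

```python
import numpy as np

def sphere(x):
    """Single-objective test problem: f(x) = sum(x_i^2), minimum 0 at the origin."""
    x = np.asarray(x, float)
    return float(np.sum(x ** 2))

def schaffer(x):
    """Bi-objective test problem (Schaffer): f1(x) = x^2, f2(x) = (x - 2)^2.
    The two objectives conflict; the Pareto-optimal set is x in [0, 2]."""
    return float(x ** 2), float((x - 2.0) ** 2)
```

A single-objective algorithm minimizes one scalar such as <code>sphere</code>, while a multi-objective algorithm approximates the trade-off front between the two values returned by <code>schaffer</code>.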
== Machine Learning ==

* [[Documentation/Modules/NeuralNetwork | <code>NeuralNetwork</code>]]: Artificial Neural Network
== Miscellaneous Modules ==

* [[Documentation/Modules/SurrogateManager | <code>SurrogateManager</code>]]: A Framework for Surrogate Management
''Current version as of 24 April 2019, 22:04''