This project includes three parts:
- One-dimensional search methods (see dimsearch.py)
- Methods for unconstrained optimization problems (see ucoptim.py)
- Methods for constrained optimization problems (see coptim.py)
dimsearch.py includes two methods: the Golden Section Method (0.618 Method) and the Newton Iteration Method.
How to use it:
- Define an objective function to be minimized
- Call a method function to solve it
DEMO
```python
from onedimsearch import *
import numpy as np

# define an objective function
def fun(t):
    return np.arctan(t) * t - 0.5 * np.log(t * t + 1)

# call the method functions
print(NewtonIter(fun, 1))
print(GoldenSection(fun, -1, 1))
```
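For readers who want to see what these two routines do internally, here is a minimal, self-contained sketch of both one-dimensional methods applied to the same objective. This is not the repo's implementation — the names `golden_section` and `newton_iter` are mine — just the textbook algorithms; the Newton version uses central-difference derivatives to keep the sketch generic.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    # shrink the bracket [a, b] by the golden ratio (~0.618) each iteration
    rho = (math.sqrt(5) - 1) / 2
    x1 = b - rho * (b - a)
    x2 = a + rho * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + rho * (b - a)
            f2 = f(x2)
        else:                    # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - rho * (b - a)
            f1 = f(x1)
    return (a + b) / 2

def newton_iter(f, t, tol=1e-8, max_iter=50, h=1e-5):
    # Newton's method on f'(t) = 0, with central-difference derivatives
    for _ in range(max_iter):
        d1 = (f(t + h) - f(t - h)) / (2 * h)
        d2 = (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)
        if abs(d1) < tol:
            break
        t = t - d1 / d2
    return t

def fun(t):
    return math.atan(t) * t - 0.5 * math.log(t * t + 1)

print(golden_section(fun, -1, 1))  # the true minimizer is t = 0
print(newton_iter(fun, 1))
```

Both should land very close to t = 0, since f'(t) = arctan(t) vanishes only there.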
ucoptim.py includes two methods: the Steepest Descent Method and the Conjugate Gradient Method.
How to use it:
- Define an objective function to be minimized
- Call a method function to solve it
DEMO
```python
from ucoptim import *

# define an objective function
def fun(x):
    return x[0]**2 + 25 * x[1]**2

# call the method functions
print(ConjugateGradient(fun, [32, 2], method='FR'))
print(SteepestDescent(fun, [1, 2]))
```
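As a rough illustration of what a steepest-descent solver does under the hood, here is a self-contained sketch on the same quadratic. The name `steepest_descent` and its parameters are mine, not the repo's API, and the repo's version may use a different line search; this sketch uses backtracking with the Armijo sufficient-decrease rule.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=10000):
    # move opposite the gradient; choose the step by backtracking (Armijo rule)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:     # gradient small enough: stop
            break
        t = 1.0
        while f(x - t * g) > f(x) - 0.5 * t * np.dot(g, g):
            t *= 0.5                    # halve until sufficient decrease
        x = x - t * g
    return x

def fun(x):
    return x[0]**2 + 25 * x[1]**2

def grad(x):
    return np.array([2 * x[0], 50 * x[1]])

print(steepest_descent(fun, grad, [32, 2]))  # converges toward [0, 0]
```

Note the zig-zag behavior this problem provokes: the 25x₁² term makes the level sets elongated ellipses, which is exactly the case where conjugate-gradient methods outperform steepest descent.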
coptim.py includes two methods: the External Penalty Function Method (with three inner solvers: the PSO Method, the Conjugate Gradient Method, and the Steepest Descent Method) and the Internal Penalty Function Method (not ready yet).
How to use it:
- Define an objective function to be minimized, along with constraint functions
- Instantiate the Extenal/Internal penalty class
- Use its class methods to solve the problem
DEMO
```python
from coptim import *
import numpy as np

# define the objective function
def ori_objfun(x):
    x = np.array(x)
    value = 0.0
    for each in x:
        value += each**2
    return value

# define an inequality constraint function
def inequal(x):
    x = np.array(x)
    value = 0.0
    for each in x:
        value = 1 - each
    return value

# use the external penalty class to solve
pf = ExtenalPenaltyFunction(ori_objfun, [inequal], [])

# solve with the PSO method
solution, minvalue = pf.PSO_method(3, [[5], [6], [7]], np.random.rand(3, 1), [1, 1, 1])
print(solution)

# solve with the SteepestDescent method
# print(pf.SteepestDescent_method([3]))
```
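To show the idea behind the external penalty method itself, here is a self-contained sketch under two assumptions that are mine, not the repo's: inequality constraints follow the g(x) ≤ 0 convention, and the inner solver is plain gradient descent with numerical gradients (the repo instead offers PSO, Conjugate Gradient, and Steepest Descent). All function names here (`external_penalty`, `minimize_gd`, `num_grad`) are hypothetical.

```python
import numpy as np

def num_grad(f, x, h=1e-6):
    # central-difference gradient, one coordinate at a time
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def minimize_gd(f, x0, tol=1e-8, max_iter=500):
    # inner solver: gradient descent with backtracking line search
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = num_grad(f, x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        while f(x - t * g) > f(x) - 0.5 * t * np.dot(g, g):
            t *= 0.5
            if t < 1e-12:
                break
        x = x - t * g
    return x

def external_penalty(f, ineq_cons, x0, mu=1.0, beta=10.0, n_outer=8):
    # minimize f(x) + mu * sum(max(0, g_i(x))^2), raising mu each round;
    # assumes constraints are written as g_i(x) <= 0
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        penalized = lambda z, m=mu: f(z) + m * sum(
            max(0.0, g(z)) ** 2 for g in ineq_cons)
        x = minimize_gd(penalized, x)
        mu *= beta
    return x

# minimize x0^2 + x1^2 subject to 1 - x0 <= 0 (i.e. x0 >= 1): optimum is [1, 0]
sol = external_penalty(lambda x: x[0]**2 + x[1]**2,
                       [lambda x: 1 - x[0]],
                       [0.0, 0.0])
print(sol)  # approaches [1, 0]
```

The key point is the outer loop: each unconstrained subproblem is only mildly penalized, and the iterates approach the feasible region from outside as mu grows — which is why this family is called "external".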